Artificial Intelligence continues to advance with great potential, yet its risks remain only partly understood. Keeping humans in the loop is essential to operationalizing AI effectively and responsibly at scale.

This must-attend conference highlights the technical complexities of AI while providing a practical roadmap to shape, implement and evolve your AI strategy. It brings together key stakeholders across sectors to tackle today’s governance challenges and navigate uncertain risk landscapes.

Join government officials, leading AI ethics and safety thought leaders, business, legal, and compliance executives, privacy and IP legal practitioners, data scientists, and other experts at the forefront of trustworthy AI.

What’s New for 2025

Practical Insights: Navigating the Shifting Landscape of U.S. AI Policy

Operationalizing AI Governance: Balancing Innovation, Technology and Legal Considerations

How States Are Filling the Void Amid U.S. Regulatory Uncertainty and Changing Executive Orders

The Next Phase of Operationalizing the EU AI Act: Practical Impact of the Risk Classification System on Your Implementation Strategy

Addressing and Detecting Bias in Algorithms, Testing and AI Outputs: Rethinking Your Approach to Navigating Evolving Legal and Compliance Challenges

Implementing a Cross-Collaborative Approach to AI, Cybersecurity & Privacy

Adopting an Extra-Regulatory, Policy-Neutral Approach to AI Governance: From Setting Standards and Establishing Partnerships to AI Rating Systems and Performance Metrics

View Agenda

U.S. Implications of the EU AI Act

The European Union (EU) is leading the global charge on AI regulation. U.S. companies are not beyond its regulatory reach, however, and should be preparing their AI risk mitigation efforts accordingly.

Read Article

Questions?

Email us at [email protected]

Get in Touch

Ready to Register?

Secure your space now.

Register Now